6 research outputs found

    Selective subtraction for handheld cameras

    © 2013 IEEE. Background subtraction techniques model the background of the scene using the stationarity property and classify the scene into two classes, namely foreground and background. In doing so, most moving objects become foreground indiscriminately, except in dynamic scenes (such as those with waving tree leaves, water ripples, or a water fountain), which are typically 'learned' as part of the background using a large training set of video data. We introduce a novel concept of background as the objects other than the foreground, which may include moving objects in the scene that cannot be learned from a training set because they occur only irregularly and sporadically, e.g., a walking person. We propose a 'selective subtraction' method as an alternative to standard background subtraction, and show that a reference plane in a scene viewed by two cameras can be used as the decision boundary between foreground and background. In our definition, the foreground may actually occur behind a moving object. Furthermore, the reference plane can be selected in a very flexible manner, using, for example, the actual moving objects in the scene, if needed. We extend this idea to allow multiple reference planes, resulting in multiple foregrounds or backgrounds. We present a diverse set of examples to show that: 1) the technique performs better than standard background subtraction techniques without the need for training, camera calibration, disparity map estimation, or special camera configurations; 2) it is potentially more powerful than standard methods because its flexibility makes it possible to select in real time what to filter out as background, regardless of whether the object is moving or not, or whether it is a rare event or a frequent one. Furthermore, we show that this technique is relatively immune to camera motion and performs well for hand-held cameras.
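    The reference-plane idea can be sketched in a highly simplified form: the homography induced by the reference plane aligns the two views for everything lying on that plane, so pixels with a large warp residual must lie off the plane. The helpers below are illustrative assumptions, not the authors' code, and this toy version only flags off-plane pixels; the full method additionally uses the direction of the residual parallax to decide which side of the plane (foreground vs. background) a pixel lies on.

    ```python
    import numpy as np

    def warp_with_homography(img, H):
        """Warp img by homography H using nearest-neighbour sampling.
        (Toy implementation; real code would use cv2.warpPerspective.)"""
        h, w = img.shape
        out = np.zeros_like(img)
        Hinv = np.linalg.inv(H)
        ys, xs = np.mgrid[0:h, 0:w]
        pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
        src = Hinv @ pts  # back-project each output pixel into the source view
        sx = np.round(src[0] / src[2]).astype(int)
        sy = np.round(src[1] / src[2]).astype(int)
        valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
        out[ys.ravel()[valid], xs.ravel()[valid]] = img[sy[valid], sx[valid]]
        return out

    def selective_mask(frame_a, frame_b, H_plane, thresh=10.0):
        """Pixels whose warp residual exceeds thresh are inconsistent with
        the reference-plane homography, i.e., they lie off the plane."""
        aligned = warp_with_homography(frame_b, H_plane)
        residual = np.abs(frame_a.astype(float) - aligned.astype(float))
        return residual > thresh
    ```

    With the identity homography this degenerates to plain frame differencing; the interesting case is a non-trivial `H_plane` fitted to a chosen reference plane, which is what lets the user decide in real time what counts as background.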

    Blind Blur Estimation Using Low Rank Approximation Of Cepstrum

    The quality of image restoration from degraded images is highly dependent upon a reliable estimate of the blur. This paper proposes a blind blur estimation technique based on the low rank approximation of the cepstrum. The key idea is that blur functions usually have low rank compared with real images and can therefore be estimated from the cepstrum of a degraded image. We extend this idea and propose a general framework for the estimation of any type of blur. We show that the proposed technique can correctly estimate commonly used blur types in both noiseless and noisy cases. Experimental results for a wide variety of conditions, i.e., images with low resolution, large blur support, and low signal-to-noise ratio, are presented to validate the proposed method. © Springer-Verlag Berlin Heidelberg 2006
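    A minimal sketch of the pipeline the abstract describes, assuming standard definitions (the function names and the rank parameter `k` are illustrative, not from the paper): compute the real cepstrum of the degraded image, then keep its best rank-k approximation via truncated SVD, which by the paper's argument retains the low-rank blur signature while suppressing the higher-rank image content.

    ```python
    import numpy as np

    def cepstrum(img, eps=1e-8):
        """Real cepstrum: inverse FFT of the log magnitude spectrum."""
        return np.real(np.fft.ifft2(np.log(np.abs(np.fft.fft2(img)) + eps)))

    def low_rank(mat, k):
        """Best rank-k approximation (in Frobenius norm) via truncated SVD."""
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        return (U[:, :k] * s[:k]) @ Vt[:k]

    def blur_signature(degraded, k=2):
        """Keep only the dominant singular directions of the cepstrum;
        since blur kernels are low-rank relative to natural images, the
        retained component is dominated by the blur's contribution."""
        return low_rank(cepstrum(degraded), k)
    ```

    Recovering an actual kernel from this signature requires a per-blur-type fitting step, which is where the paper's general framework comes in.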

    Selective Subtraction When The Scene Cannot Be Learned

    Background subtraction techniques model the background of the scene using the stationarity property and classify the scene into two classes, foreground and background. In doing so, most moving objects become foreground indiscriminately, except for perhaps some waving tree leaves, water ripples, or a water fountain, which are typically learned as part of the background using a large training set of video data. We introduce a novel concept of background as the objects other than the foreground, which may include moving objects in the scene that cannot be learned from a training set because they occur only irregularly and sporadically, e.g., a walking person. We propose a selective subtraction method as an alternative to standard background subtraction, and show that a reference plane in a scene viewed by two cameras can be used as the decision boundary between foreground and background. In our definition, the foreground may actually occur behind a moving object. Furthermore, the reference plane can be selected in a very flexible manner, using, for example, the actual moving objects in the scene, if needed. We present a diverse set of examples to show that: (i) the technique performs better than standard background subtraction techniques without the need for training, camera calibration, disparity map estimation, or special camera configurations; (ii) it is potentially more powerful than standard methods because its flexibility makes it possible to select in real time what to filter out as background, regardless of whether the object is moving or not, or whether it is a rare event or a frequent one. © 2011 IEEE

    Single-Class Svm For Dynamic Scene Modeling

    Scene modeling is the starting point, and thus the most crucial stage, for many vision-based systems involving tracking or recognition. Most existing approaches attempt to solve this problem by making simplifying assumptions such as that of a stationary background. However, this might not always be the case, as swaying trees or ripples in the water often violate these assumptions. In this paper, we present a novel method for modeling the background of a dynamic scene, i.e., a scene that contains non-stationary background motion, such as periodic motion (e.g., pendulums or escalators) or dynamic textures (e.g., a water fountain in the background, swaying trees, or water ripples). The paper proposes a single-class support vector machine (SVM), and we show why it is preferable to other scene modeling techniques currently in use for this particular problem. Using a rectangular region around a pixel, spatial and appearance-based features are extracted from a limited amount of training data and used to learn the SVMs. These features are unique, easy to compute, and immune to rotation and to changes in scale and illumination. We experiment on a diverse set of dynamic scenes and present both qualitative and quantitative results, indicating the practicality and the effectiveness of the proposed method. © 2011 Springer-Verlag London Limited
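    The per-pixel modeling loop described above can be sketched as follows, using scikit-learn's `OneClassSVM` as a stand-in for the paper's single-class SVM. The patch features here (patch mean and standard deviation) are deliberately simple placeholders for the richer rotation-, scale-, and illumination-invariant features the paper uses, and the helper names are assumptions, not the authors' API.

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM

    def patch_features(frames, y, x, r=2):
        """Features for pixel (y, x): mean and std of the surrounding
        (2r+1)x(2r+1) patch in each frame. Placeholder for the paper's
        invariant spatial/appearance features."""
        feats = []
        for f in frames:
            patch = f[y - r:y + r + 1, x - r:x + r + 1]
            feats.append([patch.mean(), patch.std()])
        return np.array(feats)

    def fit_pixel_model(frames, y, x, nu=0.1):
        """Train one single-class SVM per pixel on background-only
        samples; at test time, appearance outside the learned support
        is labelled -1 (foreground), inside is +1 (background)."""
        return OneClassSVM(kernel="rbf", nu=nu, gamma="scale").fit(
            patch_features(frames, y, x))
    ```

    The one-class formulation is the key design choice: only background samples are available during training, so a conventional two-class classifier cannot be used, whereas the single-class SVM learns a support region for background appearance and treats anything outside it, including dynamic-texture variation it has seen, as foreground.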